Now, please do not think of it in terms of fixing the good idea’s argument. Treat it as evidence that the idea is, actually, bad, and process it so as to form a better idea, one which may or may not coincide with the original. Right now you cannot know whether your idea is in fact good or not; rather than fixing it, you should form a new idea. To do anything else is not rationality. It is rationalization. It is to become even more wrong by privileging even more hypotheses, and to make an even worse impression on the engineers you are trying to convince.
You seem to be grossly overvaluing the weight we should place on your personal testimony as a “Software Developer”. It is most certainly not ‘irrational’ to decline to abandon an idea simply because you say so very frequently and very assertively. About half the people here are software developers, and many are mathematicians as well. I have also seen the intellectual work some of them produce, which is what you declared we should evaluate people on, and it is orders of magnitude more impressive than what we have seen from you.
It does not require gross failures of rationality to decline to update drastically and abandon an idea based on one anecdote of a software developer with little knowledge of this field issuing ultimatums. This is, indeed, “evidence that the idea is, actually, bad”, but it is exceedingly weak evidence, and it would be a mistake to treat it as more.
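To make “exceedingly weak” concrete, here is a minimal sketch of the Bayesian update involved; the prior and the likelihood ratio below are purely illustrative assumptions, not measured values.

```python
# Minimal Bayesian update: how far should one assertive testimony
# move our credence that the idea is bad?
# Every number here is an illustrative assumption.

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

prior = 0.10  # assumed prior credence that the idea is bad

# One developer's emphatic say-so is only slightly more likely in
# worlds where the idea is bad than in worlds where it is good;
# assume a likelihood ratio of 1.2 : 1.
likelihood_ratio = 1.2

posterior = prob(odds(prior) * likelihood_ratio)
print(f"prior: {prior:.3f}  posterior: {posterior:.3f}")
# prior: 0.100  posterior: 0.118
```

Under these assumed numbers the testimony moves the credence by less than two percentage points; it would take many independent lines of such evidence before abandoning the idea became the rational move.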
Really? On what scale? Absolute productivity in dollar terms, or minds affected? Rarity in the population? This sounds like hyperbole.
Pardon me; I was referring somewhat more specifically to direct intellectual/academic output, not graphics-based software development projects. I know very little about how many dollars have been earned by the people in question (and can’t say I have ever been all that curious).
There are more people, with more impressive work to their names, who dismiss AI risk. If I were only going to judge AI risk by who advocates for it, then worrying about AI risk would clearly be mistaken.
I guess the point was that if we are going to consider “software developer output” even as weak evidence in this debate, why consider Eliezer’s output and not the best output of the people who agree with him?
An analogy: imagine that there is a mathematical problem. A twelve-year-old child solves the problem and says “x = 10”. Then a university professor of mathematics looks at the problem and the solution and says “indeed, you are right”. Taking this story as a whole, would you judge the “x = 10” hypothesis by the credentials of the child, or of the professor? Further, imagine that another university professor of mathematics looks at the problem and says “actually, this is wrong, x = 12”; the two professors then start a long discussion about whether “x = 10” or “x = 12”. Again, would you frame this debate as “child versus professor” or as “professor versus professor”?
The point is, the argument “I am an impressive software developer and I say EY is wrong, and EY is not an impressive software developer” is weakened by replying “well, there are other impressive software developers who say EY is right”.
Whether worrying about AI risk is mistaken probably depends on your values. Most people are more worried about being hit by a car than about being eaten by a superintelligent machine. With common values, their beliefs about which issue is more important are absolutely justified.
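As a sketch of why that prioritization can be consistent, here is a toy expected-loss comparison; every number below is a purely illustrative assumption, not anyone’s actual estimate.

```python
# Toy expected-loss comparison under common values.
# All probabilities and weights are illustrative assumptions.

p_car = 1e-2  # assumed lifetime probability of a fatal car accident
p_ai = 1e-4   # assumed probability a typical person puts on an AI catastrophe

# With common values, both outcomes are weighted roughly as "I die",
# so give them the same disutility.
loss = 1.0

print(f"expected loss from cars: {p_car * loss:.4f}")  # 0.0100
print(f"expected loss from AI:   {p_ai * loss:.4f}")   # 0.0001
```

Under these assumed numbers, worrying more about cars is the value-consistent choice, not a failure of rationality.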
For every one of those people you can find one, or ten, or a hundred, or a thousand, who dismissed your cause. Don’t go down this road looking for confirmation; that’s how self-reinforcing cults are made.
I didn’t go down any road looking for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn’t ‘cultish’; it’s just practical.
I think you grossly overestimate how much emotional agenda can disagreement with counterfactual people produce.
This doesn’t make sense as a reply to the context. I’m not sure it makes any sense as a matter of English grammar either.